Recent papers have developed alternating least squares (ALS) methods for CP and tensor ring decomposition with a per-iteration cost that is sublinear in the number of input tensor entries for low-rank decompositions. In this paper, we propose sampling-based ALS methods for CP and tensor ring decomposition whose cost avoids the exponential dependence incurred by those earlier methods, thereby significantly improving on the previous state of the art. We provide a detailed theoretical analysis and also apply the methods in a feature extraction experiment.
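As a concrete illustration of the sampling idea, here is a minimal sketch of one sampled ALS update for a 3-way CP decomposition: rather than solving the full Khatri-Rao least-squares problem over all J·K rows, it solves it over a small set of sampled rows. It uses uniform sampling for brevity, whereas the methods above design the sampling distribution to obtain guarantees; all names and shapes are illustrative.

```python
import numpy as np

def sampled_als_update(T, A, B, C, n_samples=500, rng=None):
    """One sampled update of factor A in the CP model T ~ [[A, B, C]]."""
    rng = rng or np.random.default_rng(0)
    I, J, K = T.shape
    # Sample row indices of the Khatri-Rao product C (.) B, one per (j, k) pair.
    js = rng.integers(0, J, n_samples)
    ks = rng.integers(0, K, n_samples)
    lhs = B[js] * C[ks]        # sampled Khatri-Rao rows, shape (n_samples, rank)
    rhs = T[:, js, ks].T       # matching mode-1 fibers, shape (n_samples, I)
    A_new, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return A_new.T             # updated factor, shape (I, rank)
```

A full ALS sweep would apply the analogous sampled update to B and C in turn.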
Many fundamental problems in data mining can be reduced to one or more NP-hard combinatorial optimization problems. Recent advances in novel technologies such as quantum and quantum-inspired hardware promise substantial speedups for solving these problems compared to general-purpose computers, but require the problems to be modeled in a special form, such as an Ising or quadratic unconstrained binary optimization (QUBO) model, in order to take advantage of these devices. In this work, we focus on the important binary matrix factorization (BMF) problem, which has many applications in data mining. We propose two QUBO formulations for BMF and show how clustering constraints can easily be incorporated into them. The special-purpose hardware we consider is limited in the number of variables it can handle, which presents a challenge when factorizing large matrices. We propose a sampling-based method to overcome this challenge, allowing us to factorize large rectangular matrices. In addition to these methods, we also present a simple baseline algorithm that outperforms our more sophisticated methods in several situations. We run experiments on the Fujitsu Digital Annealer, a quantum-inspired complementary metal-oxide semiconductor (CMOS) annealer, on both synthetic and real data, including gene expression data. These experiments show that our methods are able to produce more accurate BMFs than competing methods.
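To make the QUBO connection concrete, here is a hedged sketch of a rank-1 BMF QUBO (the paper's two formulations handle general rank and clustering constraints and are not reproduced here). For binary A, x, y we have ||A - x y^T||_F^2 = sum_ij A_ij + sum_ij (1 - 2 A_ij) x_i y_j, so the couplings below define a valid QUBO over z = [x; y]:

```python
import itertools
import numpy as np

def rank1_bmf_qubo(A):
    """QUBO matrix for min ||A - x y^T||_F^2 over binary x (len m), y (len n)."""
    m, n = A.shape
    Q = np.zeros((m + n, m + n))
    Q[:m, m:] = 1.0 - 2.0 * A   # coupling between x_i and y_j
    return Q

def brute_force_qubo(Q):
    """Exhaustive minimizer of z^T Q z; viable only for tiny problems."""
    d = Q.shape[0]
    candidates = (np.array(z) for z in itertools.product([0, 1], repeat=d))
    return min(candidates, key=lambda z: z @ Q @ z)

A = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
z = brute_force_qubo(rank1_bmf_qubo(A))
x, y = z[:3], z[3:]
print(x[:, None] * y[None, :])   # rank-1 binary reconstruction of A
```

On special-purpose hardware, the brute-force solver would of course be replaced by the annealer.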
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP) and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related from non-relevant social media posts, while LETT is a Named Entity Recognition (NER) task that aims at extracting location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
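Below is a hedged sketch of how one of the RCTP classifiers (BERT) could be fine-tuned as a binary relevance classifier with Hugging Face Transformers; the example tweets, hyperparameters, and dataset plumbing are illustrative, not the paper's actual setup.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # flood-related vs. non-relevant

class TweetDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = TweetDataset(["flooded street in the city", "nice weather today"], [1, 0])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rctp-bert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```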
Are extralinguistic signals such as image pixels crucial for inducing constituency grammars? While past work has shown substantial gains from multimodal cues, we investigate whether such gains persist in the presence of rich information from large language models (LLMs). We find that our approach, LLM-based C-PCFG (LC-PCFG), outperforms previous multimodal methods on the task of unsupervised constituency parsing, achieving state-of-the-art performance on a variety of datasets. Moreover, LC-PCFG reduces the parameter count by over 50% and speeds up training by 1.7x for image-aided models and by more than 5x for video-aided models. These results challenge the notion that extralinguistic signals such as image pixels are needed for unsupervised grammar induction, and point to the need for better text-only baselines when evaluating the need for multimodality in this task.
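As a minimal sketch of the conditioning idea (not the authors' architecture), the module below produces rule log-probabilities for binary rules A -> B C from a frozen-LLM sentence embedding, playing the role that image or video features play in multimodal C-PCFGs; all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class LLMConditionedRules(nn.Module):
    """Log-probabilities of binary rules A -> B C, conditioned on an LLM
    sentence embedding in place of visual features (illustrative only)."""
    def __init__(self, n_nt: int, llm_dim: int, hidden: int = 256):
        super().__init__()
        self.n_nt = n_nt
        self.mlp = nn.Sequential(
            nn.Linear(llm_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_nt ** 3),   # one logit per (A, B, C) triple
        )

    def forward(self, llm_emb: torch.Tensor) -> torch.Tensor:
        n = self.n_nt
        logits = self.mlp(llm_emb).view(-1, n, n, n)
        # Normalize over right-hand sides so each A has a distribution over (B, C).
        return logits.flatten(2).log_softmax(-1).view_as(logits)

# llm_emb would come from, e.g., mean-pooled hidden states of a frozen LLM.
rules = LLMConditionedRules(n_nt=30, llm_dim=768)
log_probs = rules(torch.randn(2, 768))   # shape [batch, A, B, C]
```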
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks. Code will be available soon.
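A minimal sketch of the inter-modal part of objective (2), written as a symmetric InfoNCE loss over pooled clip embeddings; the encoders, the masking, and the intra-modal term are omitted, and names are illustrative.

```python
import torch
import torch.nn.functional as F

def inter_modal_contrastive(audio_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE between [batch, dim] audio and video clip embeddings."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature                     # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)   # i-th pair is positive
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```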
Conventional cameras capture image irradiance on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel $\rho$-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) using any existing RGB image dataset and finetune models originally trained for the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on snapshots from various cameras. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression than RGB-domain processing. Furthermore, the proposed $\rho$-Vision generalizes across various camera sensors and different task-specific models. An additional advantage of eliminating the ISP is the potential reduction in computation and processing time.
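The round-trip training signal behind CycleR2R can be sketched as a standard cycle-consistency loss between the unrolled ISP and invISP modules; the adversarial losses on each domain, which unpaired training also needs, are omitted, and the module names and weight are illustrative.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(isp: nn.Module, inv_isp: nn.Module,
                           raw: torch.Tensor, rgb: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """Round-trip L1 losses for unpaired RAW<->RGB training."""
    l1 = nn.L1Loss()
    raw_cycle = inv_isp(isp(raw))   # RAW -> RGB -> RAW
    rgb_cycle = isp(inv_isp(rgb))   # RGB -> RAW -> RGB
    return lam * (l1(raw_cycle, raw) + l1(rgb_cycle, rgb))
```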
Graph neural networks have been shown to learn effective node representations, enabling node-, link-, and graph-level inference. Conventional graph networks assume static relations between nodes, while relations between entities in a video often evolve over time, with nodes entering and exiting dynamically. In such temporally-dynamic graphs, a core problem is inferring the future state of spatio-temporal edges, which can constitute multiple types of relations. To address this problem, we propose MTD-GNN, a graph network for predicting temporally-dynamic edges for multiple types of relations. We propose a factorized spatio-temporal graph attention layer to learn dynamic node representations and present a multi-task edge prediction loss that models multiple relations simultaneously. The proposed architecture operates on top of scene graphs that we obtain from videos through object detection and spatio-temporal linking. Experimental evaluations on ActionGenome and CLEVRER show that modeling multiple relations in our temporally-dynamic graph network can be mutually beneficial, outperforming existing static and spatio-temporal graph neural networks, as well as state-of-the-art predicate classification methods.
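A minimal sketch of a multi-task edge-prediction head in the spirit of MTD-GNN: one binary classifier per relation type, applied to pairs of dynamic node embeddings and trained jointly under a shared loss; shapes and names are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRelationEdgeHead(nn.Module):
    """One binary edge classifier per relation type over node-pair embeddings."""
    def __init__(self, dim: int, n_relations: int):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(2 * dim, 1) for _ in range(n_relations))

    def forward(self, h_src: torch.Tensor, h_dst: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([h_src, h_dst], dim=-1)                       # [E, 2*dim]
        return torch.cat([head(pair) for head in self.heads], dim=-1)  # [E, R]

def multi_task_edge_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # labels[e, r] = 1.0 if relation r holds on future edge e, else 0.0
    return F.binary_cross_entropy_with_logits(logits, labels)
```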
Deep learning models require an enormous amount of data for training. However, machine learning has recently been shifting from model-centric to data-centric approaches, where the focus is on refining and improving the quality of the data to improve the learning performance of models, rather than on redesigning model architectures. In this paper, we propose CLIP, i.e., Curriculum Learning with Iterative data Pruning. CLIP combines two data-centric approaches, curriculum learning and dataset pruning, to improve model accuracy and convergence speed. The proposed scheme applies loss-aware dataset pruning to iteratively remove the least significant samples, progressively reducing the size of the effective dataset during curriculum learning. Extensive experiments on crowd density estimation models validate the combination of the two approaches by reducing convergence time and improving generalization. To our knowledge, the idea of data pruning as an embedded process in curriculum learning is novel.
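One plausible reading of the loss-aware pruning step is sketched below: after each curriculum stage, drop the fraction of currently active samples with the smallest loss, treating them as least significant; the schedule, fraction, and helper names are illustrative.

```python
import numpy as np

def prune_by_loss(active_indices, losses, prune_frac=0.1):
    """Drop the prune_frac lowest-loss (least significant) samples."""
    order = np.argsort(losses)                 # ascending: smallest loss first
    n_drop = int(len(active_indices) * prune_frac)
    return [active_indices[i] for i in order[n_drop:]]

# Inside a curriculum loop (training/eval helpers are assumed, not defined here):
#   active = list(range(len(dataset)))
#   for stage in range(n_stages):
#       train_one_stage(model, dataset, active, difficulty=stage)
#       losses = per_sample_losses(model, dataset, active)
#       active = prune_by_loss(active, losses)
```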
Density estimation is one of the most widely used methods for crowd counting, in which a deep learning model learns from head-annotated crowd images to estimate crowd density in unseen images. Typically, the learning performance of the model is highly impacted by the accuracy of the annotations, and inaccurate annotations may lead to localization and counting errors during prediction. A significant body of work exists on crowd counting using perfectly labelled datasets, but none of it explores the impact of annotation errors on model accuracy. In this paper, we investigate the impact of imperfect labels (both noisy and missing labels) on crowd counting accuracy. We propose a system that automatically generates imperfect labels using a deep learning model (called the annotator), which are then used to train a new crowd counting model (the target model). Our analysis on two crowd counting models and two benchmark datasets shows that the proposed scheme achieves an accuracy close to that of the model trained with perfect labels, demonstrating the robustness of crowd models to annotation errors.
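To make the two imperfection types concrete, the sketch below perturbs ground-truth head points directly, adding spatial noise and randomly dropping annotations; note the paper instead obtains imperfect labels from a trained annotator network, so this is only an illustration of the label corruptions studied, with illustrative parameters.

```python
import numpy as np

def imperfect_labels(points, noise_std=5.0, missing_frac=0.1, rng=None):
    """points: (N, 2) head annotations in pixels; returns corrupted annotations."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(points)) >= missing_frac        # randomly drop heads
    noisy = points[keep] + rng.normal(0.0, noise_std, (int(keep.sum()), 2))
    return noisy                                          # noisy + missing labels
```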
The rapid outbreak of the COVID-19 pandemic prompted scientists and researchers to prepare the world for future disasters. During the pandemic, global healthcare authorities stressed the importance of disinfecting objects and surfaces. To implement efficient and safe disinfection services during the pandemic, robots have been utilized for indoor assets. In this paper, we envision the use of drones for the disinfection of outdoor assets in hospitals and other facilities. Such heterogeneous assets may have different service demands (e.g., service time, quantity of disinfectant material), whereas drones typically have limited capacity (i.e., travel time, disinfectant carrying capacity). To serve all facility assets efficiently, the drone-to-asset allocation and drone travel routes must be optimized. In this paper, we formulate a capacitated vehicle routing problem (CVRP) to find an optimal route for each drone such that the total service time is minimized, while each drone simultaneously meets the demands of the assets allocated to it. The problem is solved using mixed integer programming (MIP). As CVRP is an NP-hard problem, we propose a lightweight heuristic to achieve sub-optimal performance while reducing the time complexity of solving problems involving a large number of assets.
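As a hedged sketch of a lightweight capacity-aware heuristic in the spirit described (the paper's exact heuristic and MIP formulation are not reproduced here), the routine below greedily extends each drone route with the nearest unserved asset that fits the remaining capacity, returning to the depot to start a new route otherwise; all names and the example data are illustrative.

```python
import numpy as np

def greedy_cvrp(depot, assets, demands, capacity):
    """Greedy nearest-feasible-neighbor routes; assumes every demand <= capacity."""
    assert max(demands) <= capacity, "an asset's demand exceeds drone capacity"
    unserved = set(range(len(assets)))
    routes, route, pos, load = [], [], depot, 0.0
    while unserved:
        feasible = [i for i in unserved if load + demands[i] <= capacity]
        if not feasible:                       # capacity spent: start a new route
            routes.append(route)
            route, pos, load = [], depot, 0.0
            continue
        nxt = min(feasible, key=lambda i: np.linalg.norm(assets[i] - pos))
        route.append(nxt)
        load += demands[nxt]
        pos = assets[nxt]
        unserved.remove(nxt)
    routes.append(route)
    return routes

# e.g. greedy_cvrp(np.zeros(2), np.array([[1., 0.], [2., 0.], [0., 3.]]),
#                  demands=[1.0, 1.0, 1.0], capacity=2.0)
```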